fuel economy
Trump Wants to Trade Fuel Economy for Cheaper Cars. But It Might Not Work
By rolling back auto industry fuel efficiency goals, US president Donald Trump hopes to make new cars cheaper. But prices won't drop for years, and consumers will spend more on gas in the meantime. The Trump administration says its proposal to roll back vehicle fuel economy standards, announced officially in the Oval Office on Wednesday, is an attempt to shave dollars off the ballooning cost of new cars in the US. But the intended price drops likely won't show up on dealership lots and showroom floors for months if not years, given the length of automakers' product planning schedule. It would also likely force Americans to pay more, long-term, at another place they tend to visit more frequently: the pump.
- Asia > Nepal (0.15)
- North America > United States > Louisiana (0.05)
- North America > United States > Virginia (0.05)
- (5 more...)
- Transportation (1.00)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Automobiles & Trucks (1.00)
Energy Management of Multi-mode Plug-in Hybrid Electric Vehicle using Multi-agent Deep Reinforcement Learning
Hua, Min, Zhang, Cetengfei, Zhang, Fanggang, Li, Zhi, Yu, Xiaoli, Xu, Hongming, Zhou, Quan
The recently emerging multi-mode plug-in hybrid electric vehicle (PHEV) technology is one of the pathways contributing to decarbonization, and its energy management requires multiple-input multiple-output (MIMO) control. At present, existing methods usually decouple the MIMO control into several multiple-input single-output (MISO) control loops and can only achieve locally optimal performance. To optimize the multi-mode vehicle globally, this paper studies a MIMO control method for energy management of the multi-mode PHEV based on multi-agent deep reinforcement learning (MADRL). By introducing a relevance ratio, a hand-shaking strategy is proposed to enable two learning agents to work collaboratively under the MADRL framework using the deep deterministic policy gradient (DDPG) algorithm. Unified settings for the DDPG agents are obtained through a sensitivity analysis of the factors influencing learning performance. The optimal working mode for the hand-shaking strategy is attained through a parametric study on the relevance ratio. The advantage of the proposed energy management method is demonstrated on a software-in-the-loop testing platform. The study indicates that the learning rate of the DDPG agents has the greatest influence on learning performance. Using the unified DDPG settings and a relevance ratio of 0.2, the proposed MADRL system can save up to 4% energy compared to the single-agent learning system and up to 23.54% compared to the conventional rule-based system.
- Europe > United Kingdom (0.04)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Transportation > Ground > Road (1.00)
- Transportation > Electric Vehicle (1.00)
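The abstract above describes a hand-shaking strategy in which a relevance ratio couples the two DDPG agents. The paper's exact coupling formula is not given here; a minimal sketch of one plausible reading, in which each agent's reward mixes in a share of its partner's objective, is:

```python
def handshake_rewards(fuel_reward, battery_reward, relevance=0.2):
    """Couple two agents' objectives via a relevance ratio (illustrative).

    Each agent optimizes its own objective plus a share of its
    partner's, so the engine-side and battery-side control loops
    are no longer trained in isolation.
    """
    r_engine = (1 - relevance) * fuel_reward + relevance * battery_reward
    r_battery = (1 - relevance) * battery_reward + relevance * fuel_reward
    return r_engine, r_battery
```

At a relevance ratio of 0, this degenerates to two independent single-agent learners; the abstract reports 0.2 as the best setting found in the paper's parametric study.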
Review: The 2023 Cadillac CT4-V is a semi-autonomous American sport sedan
The 2023 Cadillac CT4-V is a compact sport sedan that can be equipped with the Super Cruise hands-free highway driving system, which works on 400,000 miles of roads. The Cadillac CT4-V was very much designed to be a driver's car, but it can drive itself … some of the time. The compact sports sedan is available with the latest version of Cadillac's Super Cruise highway driving system, which provides hands-free lane-centering adaptive cruise control. Super Cruise uses high definition cameras, ultrasonic sensors and radar to keep an eye on its surroundings, while hyper-accurate GPS maps help position it on the road. It now works on over 400,000 miles of certified roads across North America and can also check for traffic and change lanes at the flick of the turn signal or automatically if there is a slower car in front of it.
- North America > United States > New Jersey (0.15)
- North America > United States > Michigan (0.05)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks > Manufacturer (1.00)
Safe Reinforcement Learning for an Energy-Efficient Driver Assistance System
Hailemichael, Habtamu, Ayalew, Beshah, Kerbel, Lindsey, Ivanco, Andrej, Loiselle, Keith
Reinforcement learning (RL)-based driver assistance systems seek to improve fuel consumption via continual improvement of powertrain control actions using experiential data from the field. However, the need to explore diverse experiences in order to learn optimal policies often limits the application of RL techniques in safety-critical systems like vehicle control. In this paper, an exponential control barrier function (ECBF) is derived and utilized to filter unsafe actions proposed by an RL-based driver assistance system. The RL agent freely explores and optimizes the performance objectives, while unsafe actions are projected to the closest actions in the safe domain. The reward is structured so that the driver's acceleration requests are met in a manner that improves fuel economy without compromising comfort. The optimal gear and traction torque control actions that maximize the cumulative reward are computed via the Maximum a Posteriori Policy Optimization (MPO) algorithm configured for a hybrid action space. The proposed safe-RL scheme is trained and evaluated in car-following scenarios, where it is shown to effectively avoid collisions both during training and evaluation while delivering the expected fuel economy improvements of the driver assistance system.
- North America > United States > South Carolina > Greenville County > Greenville (0.04)
- North America > United States > Indiana > Marion County > Indianapolis (0.04)
- Energy (1.00)
- Automobiles & Trucks > Manufacturer (1.00)
- Transportation > Ground > Road (0.68)
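The safety filter described in the abstract projects unsafe RL actions onto the safe set defined by the ECBF condition. A toy one-dimensional sketch for a car-following constraint follows; the gains `k1`, `k2` and minimum gap `d_min` are hypothetical, and the paper's actual barrier derivation is more involved:

```python
def ecbf_filter(accel_cmd, gap, rel_speed, d_min=5.0, k1=1.0, k2=2.0):
    """Project a proposed acceleration onto the safe set (illustrative).

    Barrier: h = gap - d_min, where h >= 0 means safe.
    For a relative-degree-2 constraint, an ECBF-style condition is
    h_ddot + k2*h_dot + k1*h >= 0, where h_dot = rel_speed
    (lead minus ego) and h_ddot = -accel_cmd if the lead vehicle's
    acceleration is treated as zero.
    """
    h = gap - d_min          # distance margin to the minimum safe gap
    h_dot = rel_speed        # rate at which the gap is changing
    a_max_safe = k2 * h_dot + k1 * h   # largest ego acceleration allowed
    # Projection to the closest safe action: clip, leaving safe
    # commands untouched so exploration is unrestricted inside the set.
    return min(accel_cmd, a_max_safe)
```

The key property, as in the abstract, is that actions already inside the safe domain pass through unchanged, so the agent's exploration is only constrained at the safety boundary.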
Residual Policy Learning for Powertrain Control
Kerbel, Lindsey, Ayalew, Beshah, Ivanco, Andrej, Loiselle, Keith
Eco-driving strategies have been shown to provide significant reductions in fuel consumption. This paper outlines an active driver assistance approach that uses a residual policy learning (RPL) agent trained to provide residual actions to default powertrain controllers while balancing fuel consumption against other driver-accommodation objectives. Using previous experiences, the RPL agent learns improved traction torque and gear-shifting residual policies to adapt the operation of the powertrain to variations and uncertainties in the environment. For comparison, a traditional reinforcement learning (RL) agent trained from scratch is considered. Both agents employ the off-policy Maximum a Posteriori Policy Optimization algorithm with an actor-critic architecture. Implemented on a simulated commercial vehicle in various car-following scenarios, the RPL agent quickly learns policies that significantly improve on the baseline source policy, though by some measures they fall short of those eventually achievable by the RL agent trained from scratch.
- North America > United States > South Carolina > Greenville County > Greenville (0.04)
- North America > United States > Indiana > Marion County > Indianapolis (0.04)
- Energy (1.00)
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.67)
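The core idea of residual policy learning, as described in the abstract, is that the agent's output is added to a default controller's command rather than replacing it. A minimal sketch, where the function names and the `scale` parameter are illustrative rather than taken from the paper:

```python
def residual_action(state, base_policy, residual_policy, scale=0.1):
    """Combine a default controller with a learned residual (illustrative).

    The residual perturbs the default powertrain command instead of
    replacing it, so learning starts from a sensible baseline rather
    than from scratch.
    """
    return base_policy(state) + scale * residual_policy(state)
```

Because the initial residual is near zero, early training behaves like the default controller, which is why the abstract reports faster learning for the RPL agent than for the agent trained from scratch.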
Progress and summary of reinforcement learning on energy management of MPS-EV
Hu, Jincheng, Lin, Yang, Chu, Liang, Hou, Zhuoran, Li, Jihan, Jiang, Jingjing, Zhang, Yuanjian
The high emissions and low energy efficiency of internal combustion engines (ICE) have become unacceptable under environmental regulations and the energy crisis. As a promising alternative solution, multi-power source electric vehicles (MPS-EVs) introduce different clean energy systems to improve powertrain efficiency. The energy management strategy (EMS) is a critical technology for MPS-EVs to maximize efficiency, fuel economy, and range. Reinforcement learning (RL) has become an effective methodology for the development of EMS and has received continuous attention and research, but there is still a lack of systematic analysis of the design elements of RL-based EMS. To this end, this paper presents an in-depth analysis of the current research on RL-based EMS (RL-EMS) and summarizes its design elements. The paper first summarizes previous applications of RL in EMS from five aspects: algorithm, perception scheme, decision scheme, reward function, and innovative training method. The contribution of advanced algorithms to the training effect is shown, the perception and control schemes in the literature are analyzed in detail, different reward function settings are classified, and innovative training methods and their roles are elaborated. Finally, by comparing the development routes of RL and RL-EMS, the paper identifies the gap between advanced RL solutions and existing RL-EMS and suggests potential development directions for implementing advanced artificial intelligence (AI) solutions in EMS.
Human-like Energy Management Based on Deep Reinforcement Learning and Historical Driving Experiences
Liu, Teng, Tang, Xiaolin, Hu, Xiaosong, Tan, Wenhao, Zhang, Jinwei
Development of hybrid electric vehicles depends on an advanced and efficient energy management strategy (EMS). With online and real-time requirements in mind, this article presents a human-like energy management framework for hybrid electric vehicles based on deep reinforcement learning methods and collected historical driving data. The hybrid powertrain studied has a series-parallel topology, and its control-oriented model is established first. Then, a representative deep reinforcement learning (DRL) algorithm, the deep deterministic policy gradient (DDPG), is introduced. To enhance the derived power-split controls in the DRL framework, the globally optimal control trajectories obtained from dynamic programming (DP) are used as expert knowledge to train the DDPG model, which guarantees the optimality of the proposed control architecture. Moreover, historical driving data collected from experienced drivers are employed in place of the DP-based controls to construct the human-like EMSs. Finally, different categories of experiments are executed to evaluate the optimality and adaptability of the proposed human-like EMS. Improvements in fuel economy and convergence rate indicate the effectiveness of the constructed control structure.
- Asia > China > Chongqing Province > Chongqing (0.05)
- Asia > China > Beijing > Beijing (0.04)
- North America > Canada (0.04)
- Transportation > Ground > Road (1.00)
- Transportation > Electric Vehicle (1.00)
- Automobiles & Trucks (1.00)
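The abstract describes training DDPG with DP-optimal trajectories (and later human driving data) as expert knowledge. One common way to realize this, sketched here as an assumption rather than the paper's published loss, is to add a behavior-cloning penalty to the DDPG actor objective:

```python
def actor_loss(q_value, action, expert_action, bc_weight=0.5):
    """DDPG actor objective with a behavior-cloning term (illustrative).

    Minimizing this pushes the policy both toward high critic value
    (the -q_value term) and toward the expert control, whether a
    DP-optimal trajectory or recorded human driving, with bc_weight
    trading off the two pressures.
    """
    bc_penalty = (action - expert_action) ** 2
    return -q_value + bc_weight * bc_penalty
```

Swapping the expert source from DP solutions to experienced drivers' recorded controls, as the abstract proposes, changes only where `expert_action` comes from, which is what makes the resulting EMS "human-like".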
Real-Time Monitoring and Driver Feedback to Promote Fuel Efficient Driving
Wickramanayake, Sandareka, Bandara, H. M. N Dilum, Samarasekara, Nishal A.
Improving the fuel efficiency of vehicles is imperative to reduce costs and protect the environment. While efficient engine and vehicle designs, as well as intelligent route planning, are well-known solutions to enhance fuel efficiency, research has also demonstrated that the adoption of fuel-efficient driving behaviors could lead to further savings. In this work, we propose a novel framework to promote fuel-efficient driving behaviors through real-time automatic monitoring and driver feedback. In this framework, a random-forest-based classification model, developed using historical data, identifies fuel-inefficient driving behaviors. The classifier considers driver-dependent parameters such as speed and acceleration/deceleration patterns, as well as environmental parameters such as traffic, road topography, and weather, to evaluate the fuel efficiency of one-minute driving events. When an inefficient driving action is detected, a fuzzy logic inference system determines what the driver should do to maintain fuel-efficient driving behavior. The decided action is then conveyed to the driver via a smartphone in a non-intrusive manner. Using a dataset from a long-distance bus, we demonstrate that the proposed classification model yields an accuracy of 85.2% while increasing fuel efficiency by up to 16.4%.
- Transportation > Ground > Road (1.00)
- Energy (1.00)
- Automobiles & Trucks (1.00)
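The framework's second stage maps a detected inefficient one-minute event to driver advice via fuzzy inference. A crude rule-based stand-in for that stage is sketched below; the thresholds are hypothetical, and the actual system uses fuzzy membership functions over the listed driver and environmental inputs:

```python
def feedback(speed_kmh, accel_ms2, grade_pct):
    """Map a driving event to an advisory message (toy stand-in).

    All thresholds are hypothetical placeholders for the paper's
    fuzzy inference system; grade_pct is the road gradient.
    """
    if accel_ms2 > 1.5:
        return "ease off the accelerator"
    if speed_kmh > 90 and grade_pct >= 0:
        return "reduce cruising speed"
    return "maintain current driving"
```

In the proposed framework, the chosen message would then be pushed to the driver's smartphone non-intrusively, as the abstract describes.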
Researchers at Argonne are developing the deep learning framework MaLTESE (Machine Learning Tool for Engine Simulations and Experiments) to meet ever-increasing demands to deliver better engine performance, fuel economy and reduced emissions.
Utilizing ALCF supercomputing resources, Argonne researchers are developing the deep learning framework MaLTESE with autonomous -- or self-driving -- and cloud-connected vehicles in mind. This work could help automotive manufacturers meet ever-increasing demands to deliver better engine performance, fuel economy, and reduced emissions. In one demonstration, researchers used nearly the full capacity of the ALCF's Theta system to simulate a typical 25-minute drive cycle for 250,000 vehicles.
- North America > United States > Illinois > Cook County > Chicago (0.05)
- Europe > Germany > Hesse > Darmstadt Region > Frankfurt (0.05)
- Automobiles & Trucks > Manufacturer (0.50)
- Government > Regional Government (0.35)
Emerging technologies take pole
Motorsport has long been at the bleeding edge of innovation, and Brent Pittman, director of engineering, automotive and concept design at Autodesk, suggests that remains the case. Motorsport is more than just blazing heat, screeching brakes, a roar of engines, and the test of a driver's skill and bravery. It is positioned as the pinnacle of technological innovation coming out of the automotive industry. But for a sport that uniquely has 'Constructor Championships' to reward the work of the team behind the athlete, it is interesting that the value of new technologies hasn't been fully realised yet. Indeed, when it all boils down, an athlete may be the most talented individual, but it is technology that is the real driver behind the sport's success.
- Leisure & Entertainment > Sports > Motorsports (1.00)
- Automobiles & Trucks (1.00)